On the Sample Complexity of Learning for Blind Inverse Problems

Buskulic, Nathan, Calatroni, Luca, Rosasco, Lorenzo, Villa, Silvia

arXiv.org Machine Learning

Blind inverse problems arise in many experimental settings where the forward operator is partially or entirely unknown. In this context, methods developed for the non-blind case cannot be adapted in a straightforward manner. Recently, data-driven approaches have been proposed to address blind inverse problems, demonstrating strong empirical performance and adaptability. However, these methods often lack interpretability and are not supported by rigorous theoretical guarantees, limiting their reliability in applied domains such as imaging inverse problems. In this work, we shed light on learning in blind inverse problems within the simplified yet insightful framework of Linear Minimum Mean Square Estimators (LMMSEs). We provide an in-depth theoretical analysis, deriving closed-form expressions for optimal estimators and extending classical results. In particular, we establish equivalences with suitably chosen Tikhonov-regularized formulations, where the regularization depends explicitly on the distributions of the unknown signal, the noise, and the random forward operators. We also prove convergence results under appropriate source condition assumptions. Furthermore, we derive rigorous finite-sample error bounds that characterize the performance of learned estimators as a function of the noise level, problem conditioning, and number of available samples. These bounds explicitly quantify the impact of operator randomness and reveal the associated convergence rates as this randomness vanishes. Finally, we validate our theoretical findings through illustrative numerical experiments that confirm the predicted convergence behavior.
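The Tikhonov-regularized formulation the abstract refers to admits a simple closed form, which can be sketched as follows. This is a generic illustration with a fixed, known operator `A` and a hand-set weight `lam`, not the paper's blind setting or its distribution-dependent regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x + noise. The paper's setting has a
# random, partially unknown A; here A is fixed purely to illustrate the
# closed-form Tikhonov estimator.
n, m = 8, 6
A = rng.normal(size=(n, m))
x_true = rng.normal(size=m)
y = A @ x_true + 0.01 * rng.normal(size=n)

lam = 1e-2  # regularization weight (placeholder value)
# Closed-form minimizer of ||A x - y||^2 + lam ||x||^2:
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

print(np.linalg.norm(x_hat - x_true))  # small reconstruction error
```

In the blind case analyzed in the paper, the matrix `A.T @ A` above is effectively replaced by its expectation over the operator distribution, which is what couples the regularization to the operator randomness.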


Self-Supervised Learning from Noisy and Incomplete Data

Tachella, Julián, Davies, Mike

arXiv.org Machine Learning

Many important problems in science and engineering involve inferring a signal from noisy and/or incomplete observations, where the observation process is known. Historically, this problem has been tackled using hand-crafted regularization (e.g., sparsity, total-variation) to obtain meaningful estimates. Recent data-driven methods often offer better solutions by directly learning a solver from examples of ground-truth signals and associated observations. However, in many real-world applications, obtaining ground-truth references for training is expensive or impossible. Self-supervised learning methods offer a promising alternative by learning a solver from measurement data alone, bypassing the need for ground-truth references. This manuscript provides a comprehensive summary of different self-supervised methods for inverse problems, with a special emphasis on their theoretical underpinnings, and presents practical applications in imaging inverse problems.
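One family of self-supervised losses surveyed in such work is measurement splitting: the solver is trained to explain held-out measurements, so no ground-truth signal is needed. The sketch below uses a least-squares reconstruction as a stand-in "solver"; the names and setup are illustrative, not the manuscript's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known observation process y = A x + noise.
m, n = 12, 6
A = rng.normal(size=(m, n))
x = rng.normal(size=n)
y = A @ x + 0.05 * rng.normal(size=m)

# Split the measurements into two disjoint subsets.
idx = rng.permutation(m)
s1, s2 = idx[: m // 2], idx[m // 2:]
A1, y1 = A[s1], y[s1]
A2, y2 = A[s2], y[s2]

# Stand-in "solver": least-squares reconstruction from the first split.
x_hat = np.linalg.lstsq(A1, y1, rcond=None)[0]

# Self-supervised loss: consistency with the held-out measurements --
# computable from measurement data alone, without any reference x.
loss = np.mean((A2 @ x_hat - y2) ** 2)
print(loss)
```

In practice the stand-in solver would be a trainable network and this loss would be minimized over its weights across many measurement vectors.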


Inverse Problems Leveraging Pre-trained Contrastive Representations

Neural Information Processing Systems

We study a new family of inverse problems for recovering representations of corrupted data. We assume access to a pre-trained representation learning network R(x) that operates on clean images, like CLIP. The problem is to recover the representation of an image R(x), if we are only given a corrupted version A(x), for some known forward operator A. We propose a supervised inversion method that uses a contrastive objective to obtain excellent representations for highly corrupted images. Using a linear probe on our robust representations, we achieve higher accuracy than end-to-end supervised baselines when classifying images with various types of distortions, including blurring, additive noise, and random pixel masking. We evaluate on a subset of ImageNet and observe that our method is robust to varying levels of distortion. Our method outperforms end-to-end baselines even with a fraction of the labeled data, across a wide range of forward operators.
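A linear probe of the kind mentioned above is just a linear classifier fit on frozen features. The sketch below uses random class-conditional features as a stand-in for real representations (no CLIP or images involved); the ridge-regularized least-squares fit is one common way to train such a probe.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative frozen "representations": two classes with different means.
d, n_per = 16, 50
mu0, mu1 = rng.normal(size=d), rng.normal(size=d)
Z = np.vstack([mu0 + 0.3 * rng.normal(size=(n_per, d)),
               mu1 + 0.3 * rng.normal(size=(n_per, d))])
labels = np.array([0] * n_per + [1] * n_per)

# Fit the probe as a ridge-regularized least-squares classifier on +/-1 targets.
Zb = np.hstack([Z, np.ones((Z.shape[0], 1))])  # append a bias column
w = np.linalg.solve(Zb.T @ Zb + 1e-3 * np.eye(d + 1), Zb.T @ (2 * labels - 1))
acc = np.mean(((Zb @ w) > 0) == (labels == 1))
print(acc)  # probe accuracy on the training features
```

The representation network is never updated; only the vector `w` is learned, which is what makes probe accuracy a measure of representation quality.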


Learned iterative networks: An operator learning perspective

Hauptmann, Andreas, Öktem, Ozan

arXiv.org Artificial Intelligence

Learned image reconstruction has become a pillar in computational imaging and inverse problems. Among the most successful approaches are learned iterative networks, which are formulated by unrolling classical iterative optimisation algorithms for solving variational problems. While the underlying algorithm is usually formulated in the functional analytic setting, learned approaches are often viewed as purely discrete. In this chapter we present a unified operator view for learned iterative networks. Specifically, we formulate a learned reconstruction operator, which defines how to compute a reconstruction, separately from the learning problem, which defines what to compute. In this setting we present common approaches and show that many of them are closely related at their core. We review linear as well as nonlinear inverse problems in this framework and present a short numerical study to conclude.
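The unrolling idea above can be made concrete with plain gradient descent on the data-fidelity term: each iteration becomes one "layer" of the network. In a learned iterative network the step size `tau` (and typically a learned correction term) would be trained; here `tau` is hand-set purely to show the operator view, and the operator and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noiseless toy problem y = A x with a known linear operator A.
m, n = 10, 6
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = A @ x_true

def unrolled_reconstruction(y, A, K=1000, tau=None):
    """Apply K unrolled gradient steps x <- x - tau * A^T (A x - y)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2  # step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(K):
        x = x - tau * A.T @ (A @ x - y)  # one "layer" of the network
    return x

x_hat = unrolled_reconstruction(y, A)
print(np.linalg.norm(x_hat - x_true))
```

The reconstruction operator (the function above) is fixed once its parameters are chosen; the learning problem is the separate question of which parameters to choose, which is exactly the split the chapter formalizes.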



How Regularization Terms Make Invertible Neural Networks Bayesian Point Estimators

Heilenkötter, Nick

arXiv.org Artificial Intelligence

Whenever a quantity of interest cannot be observed directly but only through an indirect measurement process or in the presence of noise, one is faced with an inverse problem. To stabilize the reconstruction and mitigate the information loss inherent in the measurement, it is necessary to incorporate additional knowledge about the unknown data -- its prior distribution, which encodes what one expects the reconstruction to resemble, such as the characteristic features of natural images. Yet our ability to describe natural images in an explicit, algorithmic form remains quite limited. Fortunately, recent years have seen the emergence of data-driven approaches that enable the construction of priors directly from collections of representative samples. While these approaches often surpass classical methods in reconstruction quality, many of them lack theoretical guarantees and remain difficult to interpret. A promising direction explored recently [3, 4, 5, 21] involves invertible neural networks. Thanks to their bidirectional structure, a single network can simultaneously approximate the forward operator and serve as a reconstruction method, with stability ensured by the architecture itself. This hybrid use makes it possible to assess deviations from a known forward operator, or even replace it with a data-based version, while keeping both the reconstruction process and the learned measurement model interpretable. This dual capability is particularly relevant in applications where both high-fidelity reconstructions and a faithful representation of the measurement process are critical, such as scientific and medical imaging.
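The bidirectional structure described above can be illustrated with a minimal additive coupling layer (NICE-style): one set of weights defines both the forward pass and its exact inverse. This is a generic sketch, not the architecture of the cited works.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4  # half the input dimension
W = rng.normal(size=(d, d))  # weights of the coupling function t(.)

def t(h):
    """Arbitrary (non-invertible) coupling function; invertibility of the
    layer does not depend on t being invertible."""
    return np.tanh(h @ W.T)

def forward(x):
    x1, x2 = x[:d], x[d:]
    return np.concatenate([x1, x2 + t(x1)])

def inverse(z):
    z1, z2 = z[:d], z[d:]
    return np.concatenate([z1, z2 - t(z1)])

x = rng.normal(size=2 * d)
z = forward(x)
x_back = inverse(z)
print(np.max(np.abs(x - x_back)))  # exact inversion up to float error
```

Stacking such layers (with the roles of the two halves swapped between layers) yields a deep network that remains invertible by construction, which is the architectural stability property the abstract appeals to.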